Mirror descent is a gradient descent method that operates in a dual space of the parametric model. This elegant idea has been developed in convex optimization, but it has not yet been widely applied in machine learning. In this study, we present one way in which mirror descent can enable data-driven parameter initialization of neural networks. Adopting the Hopfield model as a prototype of neural networks, we demonstrate that mirror descent can train the model more effectively than the usual gradient descent with random parameter initialization.
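As an illustration of the update rule described above (a generic sketch, not the authors' Hopfield-model implementation): mirror descent takes the gradient step in a dual space defined by a mirror map. With the negative-entropy mirror map on the probability simplex, this reduces to the exponentiated-gradient update.

```python
import numpy as np

def mirror_descent_simplex(grad_f, x0, lr=0.5, steps=200):
    """Mirror descent on the probability simplex with the
    negative-entropy mirror map (exponentiated-gradient update)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        g = grad_f(x)
        # Gradient step in the dual space, mapped back to the simplex
        x = x * np.exp(-lr * g)
        x /= x.sum()
    return x

# Toy problem: minimize f(x) = ||x - p||^2 over the simplex,
# whose minimizer is x = p when p lies in the simplex.
p = np.array([0.6, 0.3, 0.1])
x = mirror_descent_simplex(lambda x: 2 * (x - p), np.ones(3) / 3)
```

The multiplicative update keeps the iterate strictly inside the simplex, so no explicit projection is needed; this is the usual reason for preferring mirror descent over projected gradient descent on this geometry.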
In optimization-based approaches to inverse problems and to statistical estimation, it is common to augment the objective with a regularizer to address challenges associated with ill-posedness. The choice of a suitable regularizer is typically driven by prior domain information and computational considerations. Convex regularizers are attractive as they are endowed with certificates of optimality as well as the toolkit of convex analysis, but exhibit a computational scaling that makes them ill-suited beyond moderate-sized problem instances. On the other hand, nonconvex regularizers can often be deployed at scale, but do not enjoy the certification properties associated with convex regularizers. In this paper, we seek a systematic understanding of the power and the limitations of convex regularization by investigating the following questions: Given a distribution, what are the optimal regularizers, both convex and nonconvex, for data drawn from the distribution? What properties of a data source govern whether it is amenable to convex regularization? We address these questions for the class of continuous and positively homogeneous regularizers, for which convex and nonconvex regularizers correspond, respectively, to convex bodies and star bodies. By leveraging dual Brunn-Minkowski theory, we show that a radial function derived from a data distribution is the key quantity for identifying optimal regularizers and for assessing the amenability of a data source to convex regularization. Using tools such as $\Gamma$-convergence, we show that our results are robust in the sense that the optimal regularizers for a sample drawn from a distribution converge to their population counterparts as the sample size grows large. Finally, we give generalization guarantees that recover previous results for polyhedral regularizers (i.e., dictionary learning) and lead to new ones for semidefinite regularizers.
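The correspondence between homogeneous regularizers and star bodies invoked above can be made explicit with the standard gauge construction (these are textbook definitions, not the paper's specific notation):

```latex
% Gauge (Minkowski functional) of a star body $K \subset \mathbb{R}^d$:
\[
  \|x\|_K = \inf\{\, t > 0 : x/t \in K \,\}, \qquad
  K = \{\, x \in \mathbb{R}^d : \|x\|_K \le 1 \,\}.
\]
% Positive homogeneity: $\|\lambda x\|_K = \lambda \|x\|_K$ for all $\lambda > 0$,
% and $K$ is a convex body if and only if $\|\cdot\|_K$ is a convex function.
```

A continuous, positively homogeneous regularizer that is positive away from the origin is the gauge of its unit sublevel set, so searching over such regularizers is the same as searching over star bodies, with the convex ones corresponding exactly to convex bodies.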
In this work, we focus on the problem of safe policy transfer in reinforcement learning: we seek to leverage existing policies when learning a new task with specified constraints. This problem is important for safety-critical applications where interactions are costly and unconstrained policies can lead to undesirable or dangerous outcomes, e.g., with physical robots that interact with humans. We propose a Constrained Markov Decision Process (CMDP) formulation that simultaneously enables the transfer of policies and adherence to safety constraints. Our formulation cleanly separates task goals from safety considerations and permits the specification of a wide variety of constraints. Our approach relies on a novel extension of generalized policy improvement to constrained settings via a Lagrangian formulation. We devise a dual optimization algorithm that estimates the optimal dual variable of a target task, thus enabling safe transfer of policies derived from successor features learned on source tasks. Our experiments in simulated domains show that our approach is effective; it visits unsafe states less frequently and outperforms alternative state-of-the-art methods when taking safety constraints into account.
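The dual optimization at the heart of the approach can be sketched abstractly (an illustrative projected dual ascent, not the authors' estimator; `constraint_cost` is a hypothetical stand-in for evaluating the current policy's expected cumulative constraint cost):

```python
def dual_ascent(constraint_cost, budget, lr=0.2, steps=500):
    """Projected gradient ascent on the dual variable of a CMDP
    Lagrangian: increase lambda while the expected constraint cost
    exceeds the budget, and clip at zero (dual feasibility)."""
    lam = 0.0
    for _ in range(steps):
        violation = constraint_cost(lam) - budget
        lam = max(0.0, lam + lr * violation)
    return lam

# Toy example: the cost shrinks as lambda grows; the balance point
# 1 / (1 + lam) = 0.5 is attained at lam = 1.
lam = dual_ascent(lambda lam: 1.0 / (1.0 + lam), budget=0.5)
```

The clipping at zero is what makes this a projection onto the dual-feasible set; at the optimum, either the constraint is tight or the multiplier vanishes (complementary slackness).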
This paper presents SCALES, a general framework that translates fairness principles into a common representation based on the Constrained Markov Decision Process (CMDP). With the language of causality, our framework can place constraints on the decision-making process (procedural fairness) as well as on the outcomes resulting from decisions (outcome fairness). Specifically, we show that well-known fairness principles can be encoded as a utility component, a non-causal component, or a causal component in a SCALES-CMDP. We illustrate SCALES using a set of case studies involving a simulated healthcare scenario and the real-world COMPAS dataset. Experiments demonstrate that our framework produces fair policies that embody alternative fairness principles in both single-step and sequential decision-making scenarios.
A recent development in the angle-resolved photoemission spectroscopy (ARPES) technique involves spatially resolving samples while maintaining high-resolution features of momentum space. This development readily multiplies the size and complexity of the data, and a significant part of the analysis burden is labeling similar dispersion cuts and mapping them spatially. In this work, we demonstrate that recent developments in representation learning (self-supervised learning) models, combined with k-means clustering, can help automate that part of the data analysis and save precious time, albeit with lower performance. Finally, we introduce few-shot learning (k-nearest neighbors, or kNN) in the representation space, where we selectively choose one (k = 1) reference image per known label and subsequently label the rest of the data according to the closest reference image. This last approach demonstrates the strength of self-supervised learning for automating image analysis in ARPES in particular, and it can be generalized to any scientific data analysis involving image data.
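The k = 1 labeling step described above amounts to nearest-reference classification in the representation space. A minimal sketch (the embeddings and phase labels below are hypothetical, not the paper's ARPES pipeline):

```python
import numpy as np

def label_by_nearest_reference(embeddings, ref_embeddings, ref_labels):
    """Assign to each embedding the label of its closest (k = 1)
    reference embedding, using Euclidean distance."""
    # Pairwise distances, shape (n_samples, n_references)
    d = np.linalg.norm(
        embeddings[:, None, :] - ref_embeddings[None, :, :], axis=-1
    )
    return [ref_labels[i] for i in d.argmin(axis=1)]

# One hand-picked reference embedding per known label
refs = np.array([[0.0, 0.0], [1.0, 1.0]])
labels = ["phase_A", "phase_B"]
data = np.array([[0.1, -0.1], [0.9, 1.2], [0.2, 0.1]])
pred = label_by_nearest_reference(data, refs, labels)
# pred == ["phase_A", "phase_B", "phase_A"]
```

Because only one reference per label is needed, the human effort reduces to picking a single representative image per class, which is what makes the scheme few-shot.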
This paper collects all descriptions of the solvers and ISR instances submitted to the CoRe Challenge 2022.
With the increasing need for multi-robot exploration of unknown regions in challenging environments, efficient collaborative exploration strategies are required to achieve such feats. Frontier-based Rapidly-exploring Random Tree (RRT) exploration can be deployed to explore unknown environments. However, its greedy behavior causes multiple robots to explore the regions of highest revenue, resulting in massive overlap during exploration. To address this issue, we propose a temporal-memory-based RRT (TM-RRT) exploration strategy for multiple robots to perform robust exploration in unknown environments. It computes an adaptive duration for each assigned frontier based on the relative position of each robot, and computes the revenue of each frontier. In addition, each robot is equipped with a memory of assigned frontiers that is shared across the fleet, preventing repeated assignment of the same frontier. Through both simulation and real-world deployment, we demonstrate the robustness of the TM-RRT exploration strategy by completing the exploration of a 25.0 m x 54.0 m (1350.0 m2) area, where the conventional RRT exploration strategy falls short.
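The fleet-shared memory bookkeeping can be sketched as follows (an illustrative simplification: the `revenue` function and the tuple representation of frontiers are hypothetical, and TM-RRT's adaptive-duration computation is omitted):

```python
import math

def assign_frontier(robot_pos, frontiers, assigned_memory, revenue):
    """Pick the highest-revenue frontier not yet recorded in the
    fleet-shared memory, then record it so that no other robot
    is assigned the same frontier."""
    best, best_rev = None, -math.inf
    for f in frontiers:
        if f in assigned_memory:
            continue  # already claimed by a fleet member
        r = revenue(robot_pos, f)
        if r > best_rev:
            best, best_rev = f, r
    if best is not None:
        assigned_memory.add(best)
    return best

# Two robots at the same position: without the shared memory both
# would claim the nearer frontier; with it, the second robot is
# pushed to the remaining one.
memory = set()
frontiers = [(0.0, 1.0), (5.0, 5.0)]
nearness = lambda p, f: -math.dist(p, f)
a = assign_frontier((0.0, 0.0), frontiers, memory, nearness)
b = assign_frontier((0.0, 0.0), frontiers, memory, nearness)
```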
There are many image restoration methods based on deep convolutional neural networks (CNNs). However, most of the literature on this topic focuses on network architectures and loss functions, while the training methods receive far less detailed treatment. Hence, some works are not easily reproducible, because knowing the hidden training tricks is required to obtain the same results. Regarding the training dataset specifically, few works discuss how to prepare and order the training image patches. Moreover, capturing new datasets to train restoration networks for real-world scenarios incurs high costs. Hence, we believe it is necessary to study the preparation and selection of training data. In this regard, we present an analysis of training patches and explore the consequences of different patch extraction methods. Eventually, we propose a guideline for extracting patches from given training images.
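As a baseline for the patch-preparation step discussed above (a generic strided extractor, not the guideline the paper proposes; the patch size and stride are illustrative):

```python
import numpy as np

def extract_patches(img, patch_size=48, stride=48):
    """Extract square patches from an H x W x C image in row-major
    order; stride == patch_size gives non-overlapping patches,
    while a smaller stride gives overlapping ones."""
    h, w = img.shape[:2]
    patches = []
    for y in range(0, h - patch_size + 1, stride):
        for x in range(0, w - patch_size + 1, stride):
            patches.append(img[y:y + patch_size, x:x + patch_size])
    return patches

img = np.zeros((96, 144, 3))       # stand-in for a training image
patches = extract_patches(img)     # 2 rows x 3 columns = 6 patches
```

Note that any border region narrower than the patch size is silently discarded; how to handle such borders, and in what order the patches are fed to training, are exactly the kinds of choices the paper's analysis concerns.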
This paper presents a new variational inference framework for image restoration, together with a convolutional neural network (CNN) structure that can solve the restoration problems described by the proposed framework. Earlier CNN-based image restoration methods primarily focused on network architecture design or training strategies in non-blind scenarios, where the degradation model is known or assumed. To move a step closer to real-world applications, CNNs have also been trained blindly on entire datasets that include diverse degradations. However, the conditional distribution of high-quality images given diversely degraded ones is too complicated to be learned by a single CNN. Hence, there have also been methods that provide additional prior information to train the CNNs. Unlike previous approaches, we focus more on the restoration objective from the Bayesian perspective, and on how to reformulate that objective. Specifically, our method relaxes the original posterior inference problem into more manageable sub-problems and thus behaves like a divide-and-conquer scheme. As a result, the proposed framework boosts performance on several restoration problems compared to previous frameworks. Specifically, our method delivers state-of-the-art performance on Gaussian denoising, real-world noise reduction, blind image super-resolution, and JPEG compression artifact reduction.
Data-centric AI calls for better, not just bigger, datasets. As data protection laws with extraterritorial scope proliferate worldwide, ensuring that datasets are legal is an increasingly important yet overlooked component of "better." To help dataset builders become more willing and able to navigate this complex legal space, this paper reviews the key legal obligations surrounding ML datasets, examines the practical impact of data laws on ML pipelines, and offers a framework for building legal datasets.